A Globally Convergent Stabilized SQP Method: Superlinear Convergence

Authors

  • Philip E. Gill
  • Vyacheslav Kungurtsev
  • Daniel P. Robinson
Abstract

Regularized and stabilized sequential quadratic programming (SQP) methods are two classes of methods designed to resolve the numerical and theoretical difficulties associated with ill-posed or degenerate nonlinear optimization problems. Recently, a regularized SQP method has been proposed that allows convergence to points satisfying certain second-order KKT conditions (SIAM J. Optim., 23(4):1983–2010, 2013). The method is formulated as a regularized SQP method with an implicit safeguarding strategy based on minimizing a bound-constrained primal-dual augmented Lagrangian. The method involves a flexible line search along a direction formed from the solution of a regularized quadratic programming subproblem and, when one exists, a direction of negative curvature for the primal-dual augmented Lagrangian. With an appropriate choice of termination condition, the method terminates in a finite number of iterations under weak assumptions on the problem functions. Safeguarding becomes relevant only when the iterates are converging to an infeasible stationary point of the norm of the constraint violations. Otherwise, the method terminates with a point that either satisfies the second-order necessary conditions for optimality, or fails to satisfy a weak second-order constraint qualification. The purpose of this paper is to establish the conditions under which this second-order regularized SQP algorithm is equivalent to the stabilized SQP method. It is shown that under conditions that are no stronger than those required by conventional stabilized SQP methods, the regularized SQP method has superlinear local convergence. The required convergence properties are obtained by allowing a small relaxation of the optimality conditions for the quadratic programming subproblem in the neighborhood of a solution.
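
As a rough orientation (a minimal sketch in generic notation; the symbols \mu, \nu, y_e and the sign conventions below are assumptions, not taken from the paper itself): for an equality-constrained problem \min_x f(x) subject to c(x) = 0, a primal-dual augmented Lagrangian of the Gill–Robinson type has the form

\[
M(x,y;\,y_e,\mu) \;=\; f(x) \;-\; c(x)^\top y_e \;+\; \frac{1}{2\mu}\,\|c(x)\|^2 \;+\; \frac{\nu}{2\mu}\,\bigl\|c(x) + \mu\,(y - y_e)\bigr\|^2 ,
\]

and one common form of the regularized (stabilized) QP subproblem solved at an iterate (x_k, y_k) is

\[
\min_{d,\,y}\;\; g_k^\top d + \tfrac12\, d^\top H_k\, d + \tfrac{\mu}{2}\,\|y\|^2
\quad\text{subject to}\quad c_k + J_k d - \mu\,(y - y_k) = 0 ,
\]

where g_k, c_k, J_k, and H_k denote the objective gradient, the constraint values, the constraint Jacobian, and an approximation of the Hessian of the Lagrangian. As \mu \to 0 the subproblem approaches the conventional SQP subproblem, while for \mu > 0 it remains well posed even when J_k is rank deficient.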

Related articles

Globalizing Stabilized SQP by Smooth Primal-Dual Exact Penalty Function

An iteration of the stabilized sequential quadratic programming (sSQP) method consists of solving a certain quadratic program in the primal-dual space, regularized in the dual variables. The advantage over classical sequential quadratic programming (SQP) is that no constraint qualifications are required for fast local convergence (i.e., the problem can be degenerate). In particul...
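
A hedged sketch of that dual regularization in generic notation (the symbol \sigma_k and the sign conventions are assumptions, not taken from the cited paper): the sSQP subproblem can be viewed as the min-max problem

\[
\min_{d}\;\max_{y}\;\; g_k^\top d + \tfrac12\, d^\top H_k d + y^\top\!\bigl(c_k + J_k d\bigr) - \frac{\sigma_k}{2}\,\|y - y_k\|^2 ,
\]

where the last term is the proximal regularization in the dual variables. Maximizing over y in closed form gives y = y_k + (c_k + J_k d)/\sigma_k and leaves an unconstrained quadratic in d containing the penalty term \tfrac{1}{2\sigma_k}\|c_k + J_k d\|^2, which stays well posed even when the constraint Jacobian J_k is rank deficient, i.e., when constraint qualifications fail.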


A globally and superlinearly convergent trust-region SQP method without a penalty function for nonlinearly constrained optimization

In this paper, we propose a new trust-region SQP method, which uses no penalty function, for solving nonlinearly constrained optimization problems. Our method consists of two alternating algorithms. Specifically, we alternate between a feasibility restoration algorithm and an objective function minimization algorithm. The global and superlinear convergence property of the proposed method is s...
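
A generic illustration of such an alternating scheme (hedged; the exact subproblems of the cited method may differ): a feasibility restoration step and an objective minimization step of the form

\[
\min_{d}\;\tfrac12\,\|c_k + J_k d\|^2 \;\;\text{s.t.}\;\; \|d\|\le\Delta_k
\qquad\text{and}\qquad
\min_{d}\;\, g_k^\top d + \tfrac12\, d^\top B_k d \;\;\text{s.t.}\;\; c_k + J_k d = r_k,\;\; \|d\|\le\Delta_k ,
\]

where \Delta_k is the trust-region radius and r_k is a residual target supplied by the restoration phase; acceptance of each step is then judged by feasibility or objective decrease directly, with no penalty function to weigh the two.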


Sharp Primal Superlinear Convergence Results for Some Newtonian Methods for Constrained Optimization

As is well known, Q-superlinear or Q-quadratic convergence of the primal-dual sequence generated by an optimization algorithm does not, in general, imply Q-superlinear convergence of the primal part. Primal convergence, however, is often of particular interest. For the sequential quadratic programming (SQP) algorithm, local primal-dual quadratic convergence can be established under the assumpti...
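
In symbols, Q-superlinear convergence of the primal-dual sequence,

\[
\bigl\|(x_{k+1}-x^*,\;\lambda_{k+1}-\lambda^*)\bigr\| \;=\; o\bigl(\|(x_k-x^*,\;\lambda_k-\lambda^*)\|\bigr),
\]

bounds only the combined error and says nothing by itself about the ratio \|x_{k+1}-x^*\|/\|x_k-x^*\|, because the dual error may dominate the norm; primal Q-superlinear convergence, \|x_{k+1}-x^*\| = o(\|x_k-x^*\|), therefore has to be established separately.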


A stabilized SQP method: superlinear convergence

Regularized and stabilized sequential quadratic programming (SQP) methods are two classes of methods designed to resolve the numerical and theoretical difficulties associated with ill-posed or degenerate nonlinear optimization problems. Recently, a stabilized SQP method has been proposed that allows convergence to points satisfying certain second-order KKT conditions (Report CCoM 13-04, Center f...


A Second-Derivative Trust-Region SQP Method with a “Trust-Region-Free” Predictor Step

In (NAR 08/18 and 08/21, Oxford University Computing Laboratory, 2008) we introduced a second-derivative SQP method (S2QP) for solving nonlinear nonconvex optimization problems. We proved that the method is globally convergent and locally superlinearly convergent under standard assumptions. A critical component of the algorithm is the so-called predictor step, which is computed from a strictly ...



Journal:

Volume   Issue

Pages  -

Publication date: 2014